killer.sh Questions
CKA Simulator A Kubernetes 1.32
Introduction
Each question needs to be solved on a specific instance other than your main candidate@terminal. You'll need to connect to the correct instance via ssh, and the command is provided before each question. To connect to a different instance, always return first to your main terminal by running the exit command.
Use sudo -i to become root on any node if necessary.
Question 1 | Contexts
Solve this question on: ssh cka9412
You're asked to extract the following information out of kubeconfig file /opt/course/1/kubeconfig on cka9412:
- Write all kubeconfig context names into /opt/course/1/contexts, one per line
- Write the name of the current context into /opt/course/1/current-context
- Write the client-certificate of user account-0027, base64-decoded, into /opt/course/1/cert
Answer:
# Step 1: Get all context names
k --kubeconfig /opt/course/1/kubeconfig config get-contexts -oname > /opt/course/1/contexts
# Step 2: Query current context
k --kubeconfig /opt/course/1/kubeconfig config current-context > /opt/course/1/current-context
# Step 3: Extract certificate and decode it
k --kubeconfig /opt/course/1/kubeconfig config view --raw -ojsonpath="{.users[0].user.client-certificate-data}" | base64 -d > /opt/course/1/cert
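Note that .users[0] assumes account-0027 is the first user entry in the kubeconfig. If there are several users, filtering by name is safer (kubectl jsonpath supports filter expressions):
# Alternative: select the user by name instead of by index
k --kubeconfig /opt/course/1/kubeconfig config view --raw -ojsonpath="{.users[?(@.name == 'account-0027')].user.client-certificate-data}" | base64 -d > /opt/course/1/cert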
Question 2 | MinIO Operator, CRD Config, Helm Install
Solve this question on: ssh cka7968
Install the MinIO Operator using Helm in Namespace minio. Then configure and create the Tenant CRD:
- Create Namespace minio
- Install Helm chart minio/operator into the new Namespace. The Helm Release should be called minio-operator
- Update the Tenant resource in /opt/course/2/minio-tenant.yaml to include enableSFTP: true under features
- Create the Tenant resource from /opt/course/2/minio-tenant.yaml
Answer:
# Step 1: Create namespace
k create ns minio
# Step 2: Install Helm chart
helm -n minio install minio-operator minio/operator
# Step 3: Update the yaml file
vim /opt/course/2/minio-tenant.yaml
# Add enableSFTP: true under features section
# Step 4: Create the Tenant resource
k -f /opt/course/2/minio-tenant.yaml apply
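The full layout of the Tenant file isn't reproduced here; as a rough sketch, assuming the field sits under spec as the task implies, the edited section could look like:
# Hypothetical excerpt of /opt/course/2/minio-tenant.yaml after the edit
# (actual file layout may differ)
spec:
  features:
    enableSFTP: true   # line added for this task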
Question 3 | Scale down StatefulSet
Solve this question on: ssh cka3962
There are two Pods named o3db-* in Namespace project-h800. The Project H800 management asked you to scale these down to one replica to save resources.
Answer:
# Check the pods
k -n project-h800 get pod | grep o3db
# Verify they are managed by a StatefulSet
k -n project-h800 get deploy,ds,sts | grep o3db
# Scale down the StatefulSet
k -n project-h800 scale sts o3db --replicas 1
# Verify
k -n project-h800 get sts o3db
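If the grep output is ambiguous, the owning controller can also be read directly from a Pod's ownerReferences (pod name taken from your own listing):
k -n project-h800 get pod <pod-name> -o jsonpath="{.metadata.ownerReferences[0].kind}"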
Question 4 | Find Pods first to be terminated
Solve this question on: ssh cka2556
Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the nodes run out of resources (cpu or memory).
Write the Pod names into /opt/course/4/pods-terminated-first.txt.
Answer:
Look for Pods without resource requests defined, as these will be the first candidates for termination:
# Manual approach
k -n project-c13 describe pod | less -p Requests
# Alternatively
k -n project-c13 describe pod | grep -A 3 -E 'Requests|^Name:'
# Or using jsonpath
k -n project-c13 get pod -o jsonpath="{range .items[*]}{.metadata.name}{.spec.containers[*].resources}{'\n'}{end}"
# Or look for Quality of Service classes
k -n project-c13 get pods -o jsonpath="{range .items[*]}{.metadata.name} {.status.qosClass}{'\n'}{end}"
Find the Pods with BestEffort QoS class and write their names into the file.
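One possible way to combine the QoS listing with the file output (a sketch; confirm the names manually before relying on the file):
k -n project-c13 get pod -o jsonpath="{range .items[*]}{.metadata.name} {.status.qosClass}{'\n'}{end}" | grep BestEffort | cut -d' ' -f1 > /opt/course/4/pods-terminated-first.txt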
Question 5 | Kustomize configure HPA Autoscaler
Solve this question on: ssh cka5774
Using the Kustomize config at /opt/course/5/api-gateway do the following:
- Remove the ConfigMap horizontal-scaling-config completely
- Add HPA named api-gateway for the Deployment api-gateway with min 2 and max 4 replicas. It should scale at 50% average CPU utilisation
- In prod the HPA should have max 6 replicas
- Apply your changes for staging and prod so they're reflected in the cluster
Answer:
# Explore the base config
cd /opt/course/5/api-gateway
k kustomize base
# Explore staging overlay
k kustomize staging
# Explore prod overlay
k kustomize prod
# Step 1: Remove ConfigMap from base, staging, and prod config files
# Edit base/api-gateway.yaml, staging/api-gateway.yaml, prod/api-gateway.yaml
# Step 2: Add HPA to base config
# Add HPA to base/api-gateway.yaml with min 2, max 4 replicas scaling at 50% CPU
# Step 3: Override max replicas in prod
# Add HPA to prod/api-gateway.yaml with maxReplicas: 6
# Step 4: Apply changes
k kustomize staging | kubectl apply -f -
k kustomize prod | kubectl apply -f -
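You can also preview such changes against the live cluster before applying them; kubectl diff prints the pending differences:
k kustomize staging | kubectl diff -f -
k kustomize prod | kubectl diff -f -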
# Clean up remaining ConfigMaps
k -n api-gateway-staging delete cm horizontal-scaling-config
k -n api-gateway-prod delete cm horizontal-scaling-config
Question 6 | Storage, PV, PVC, Pod volume
Solve this question on: ssh cka7968
Create resources for storage:
- Create a new PersistentVolume named safari-pv with 2Gi capacity, ReadWriteOnce access, hostPath /Volumes/Data, and no storageClassName
- Create a new PersistentVolumeClaim in Namespace project-t230 named safari-pvc requesting 2Gi storage and ReadWriteOnce access with no storageClassName
- Create a new Deployment safari in Namespace project-t230 that mounts that volume at /tmp/safari-data using the image httpd:2-alpine
Answer:
Create and apply the required YAML files:
# Create PV YAML
vim 6_pv.yaml
# Apply it
k -f 6_pv.yaml create
# Create PVC YAML
vim 6_pvc.yaml
# Apply it
k -f 6_pvc.yaml create
# Create Deployment YAML
k -n project-t230 create deploy safari --image=httpd:2-alpine --dry-run=client -o yaml > 6_dep.yaml
# Edit to add volume mount
vim 6_dep.yaml
# Apply it
k -f 6_dep.yaml create
# Verify the mounting
k -n project-t230 describe pod <pod-name> | grep -A2 Mounts:
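An optional sanity check is writing into the mounted path (pod name from your own output):
k -n project-t230 exec <pod-name> -- touch /tmp/safari-data/test-file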
Question 7 | Node and Pod Resource Usage
Solve this question on: ssh cka5774
The metrics-server has been installed in the cluster. Write two bash scripts which use kubectl:
- Script /opt/course/7/node.sh should show resource usage of Nodes
- Script /opt/course/7/pod.sh should show resource usage of Pods and their containers
Answer:
# First script: node.sh
echo 'kubectl top node' > /opt/course/7/node.sh
# Second script: pod.sh
echo 'kubectl top pod --containers=true' > /opt/course/7/pod.sh
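It's worth running both scripts once to confirm they work (metrics can take a moment to appear after metrics-server starts):
sh /opt/course/7/node.sh
sh /opt/course/7/pod.sh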
Question 8 | Update Kubernetes Version and join cluster
Solve this question on: ssh cka3962
Your coworker notified you that node cka3962-node1 is running an older Kubernetes version and is not even part of the cluster yet.
- Update the node's Kubernetes to the exact version of the controlplane
- Add the node to the cluster using kubeadm
Answer:
# Check control plane version
k get node
# Shows v1.32.1
# SSH to the worker node
ssh cka3962-node1
sudo -i
# Check current versions
kubectl version
kubelet --version
kubeadm version
# Update kubelet and kubectl
apt update
apt install kubectl=1.32.1-1.1 kubelet=1.32.1-1.1
# Restart kubelet
service kubelet restart
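Optionally, and in line with the official kubeadm install docs, pin the packages so a later apt upgrade doesn't move them:
apt-mark hold kubelet kubectl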
# On control plane, generate join command
ssh cka3962
sudo -i
kubeadm token create --print-join-command
# Join node to cluster
ssh cka3962-node1
sudo -i
kubeadm join 192.168.100.31:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# Verify
kubectl get nodes
Question 9 | Contact K8s Api from inside Pod
Solve this question on: ssh cka9412
There is ServiceAccount secret-reader in Namespace project-swan. Create a Pod of image nginx:1-alpine named api-contact which uses this ServiceAccount.
Exec into the Pod and use curl to manually query all Secrets from the Kubernetes Api.
Write the result into file /opt/course/9/result.json.
Answer:
# Create Pod with ServiceAccount
k run api-contact --image=nginx:1-alpine --dry-run=client -o yaml > 9.yaml
# Edit YAML to add serviceAccountName and namespace
vim 9.yaml
# Apply it
k -f 9.yaml apply
# Exec into the Pod and query the API
k -n project-swan exec api-contact -it -- sh
# Inside the Pod:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}" > result.json
exit
# Copy results to required location
k -n project-swan exec api-contact -it -- cat result.json > /opt/course/9/result.json
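The same query can also be done non-interactively in one step; a sketch (the token is read and expanded inside the Pod):
k -n project-swan exec api-contact -- sh -c 'curl -s -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"' > /opt/course/9/result.json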
Question 10 | RBAC ServiceAccount Role RoleBinding
Solve this question on: ssh cka3962
Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.
Answer:
# Create ServiceAccount
k -n project-hamster create sa processor
# Create Role
k -n project-hamster create role processor --verb=create --resource=secret --resource=configmap
# Create RoleBinding
k -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor
# Verify permissions
k -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor
k -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor
k -n project-hamster auth can-i create pod --as system:serviceaccount:project-hamster:processor
Question 11 | DaemonSet on all Nodes
Solve this question on: ssh cka2556
Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, also controlplanes.
Answer:
# Create YAML template from Deployment
k -n project-tiger create deployment --image=httpd:2-alpine ds-important --dry-run=client -o yaml > 11.yaml
# Edit to make it a DaemonSet with required specs and tolerations
vim 11.yaml
# Apply
k -f 11.yaml create
# Verify
k -n project-tiger get ds
k -n project-tiger get pod -l id=ds-important -o wide
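The DESIRED/READY count of the DaemonSet should match the total number of nodes, controlplane included; a quick cross-check:
k get node --no-headers | wc -l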
Question 12 | Deployment on all Nodes
Solve this question on: ssh cka2556
Implement the following in Namespace project-tiger:
- Create a Deployment named deploy-important with 3 replicas
- The Deployment and its Pods should have label id=very-important
- First container named container1 with image nginx:1-alpine
- Second container named container2 with image google/pause
- There should only ever be one Pod of that Deployment running on one worker node; use topologyKey: kubernetes.io/hostname for this
Answer:
Create a Deployment with PodAntiAffinity to ensure only one Pod per node:
# Create Deployment template
k -n project-tiger create deployment --image=nginx:1-alpine deploy-important --dry-run=client -o yaml > 12.yaml
# Edit YAML to add required specs and anti-affinity
vim 12.yaml
# Apply
k -f 12.yaml create
# Verify
k -n project-tiger get deploy -l id=very-important
k -n project-tiger get pod -o wide -l id=very-important
Question 13 | Gateway Api Ingress
Solve this question on: ssh cka7968
The team from Project r500 wants to replace their Ingress with a Gateway Api solution. The old Ingress is available at /opt/course/13/ingress.yaml.
Perform the following in Namespace project-r500 and for the already existing Gateway:
- Create a new HTTPRoute named traffic-director which replicates the routes from the old Ingress
- Extend the new HTTPRoute with path /auto which redirects to mobile if the User-Agent is exactly mobile and to desktop otherwise
Answer:
# Investigate existing Gateway and CRDs
k get crd
k get gateway -A
k get gatewayclass -A
k -n project-r500 get gateway main -oyaml
# Investigate the Ingress to convert
vim /opt/course/13/ingress.yaml
# Create HTTPRoute YAML
vim http-route.yaml
# Add required routes and rules
# Apply and test
k apply -f http-route.yaml
curl r500.gateway:30080/desktop
curl r500.gateway:30080/mobile
curl r500.gateway:30080/auto -H "User-Agent: mobile"
curl r500.gateway:30080/auto
Question 14 | Check how long certificates are valid
Solve this question on: ssh cka9412
Perform some tasks on cluster certificates:
- Check how long the kube-apiserver server certificate is valid using openssl or cfssl. Write the expiration date into /opt/course/14/expiration. Run the kubeadm command to list the expiration dates and confirm both methods show the same one
- Write the kubeadm command that would renew the kube-apiserver certificate into /opt/course/14/kubeadm-renew-certs.sh
Answer:
# Find the certificate
sudo -i
find /etc/kubernetes/pki | grep apiserver
# Check expiration with openssl
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep Validity -A2
# Write expiration to file (use the date from your own output)
echo "Oct 29 14:19:27 2025 GMT" > /opt/course/14/expiration
# Check with kubeadm
kubeadm certs check-expiration | grep apiserver
# Write renewal command to file
echo "kubeadm certs renew apiserver" > /opt/course/14/kubeadm-renew-certs.sh
Question 15 | NetworkPolicy
Solve this question on: ssh cka7968
There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.
To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:
- Connect to db1-* Pods on port 1111
- Connect to db2-* Pods on port 2222
Answer:
# Check existing Pods and labels
k -n project-snake get pod
k -n project-snake get pod -L app
k -n project-snake get pod -o wide
# Test current connectivity
k -n project-snake exec backend-0 -- curl -s 10.44.0.25:1111
k -n project-snake exec backend-0 -- curl -s 10.44.0.23:2222
k -n project-snake exec backend-0 -- curl -s 10.44.0.22:3333
# Create NetworkPolicy YAML
vim 15_np.yaml
# Apply
k -f 15_np.yaml create
# Verify
k -n project-snake exec backend-0 -- curl -s 10.44.0.25:1111
k -n project-snake exec backend-0 -- curl -s 10.44.0.23:2222
k -n project-snake exec backend-0 -- curl -s 10.44.0.22:3333
Question 16 | Update CoreDNS Configuration
Solve this question on: ssh cka5774
The CoreDNS configuration in the cluster needs to be updated:
- Make a backup of the existing configuration Yaml and store it at /opt/course/16/coredns_backup.yaml
- Update the CoreDNS configuration in the cluster so that DNS resolution for SERVICE.NAMESPACE.custom-domain will work exactly like and in addition to SERVICE.NAMESPACE.cluster.local
Answer:
# Check CoreDNS setup
k -n kube-system get deploy,pod
# Backup the ConfigMap
k -n kube-system get cm coredns -oyaml > /opt/course/16/coredns_backup.yaml
# Edit the ConfigMap
k -n kube-system edit cm coredns
# Add custom-domain to the kubernetes plugin line
# Restart CoreDNS
k -n kube-system rollout restart deploy coredns
# Test the configuration
k run bb --image=busybox:1 -- sh -c 'sleep 1d'
k exec -it bb -- sh
# Inside the Pod:
nslookup kubernetes.default.svc.custom-domain
nslookup kubernetes.default.svc.cluster.local
Question 17 | Find Container of Pod and check info
Solve this question on: ssh cka2556
In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.
Using command crictl:
- Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt
- Write the logs of the container into /opt/course/17/pod-container.log
Answer:
# Create the Pod
k -n project-tiger run tigers-reunite --image=httpd:2-alpine --labels "pod=container,container=pod"
# Find the node
k -n project-tiger get pod -o wide
# Example output shows it's scheduled on cka2556-node1
# SSH to the node
ssh cka2556-node1
sudo -i
# Find the container
crictl ps | grep tigers-reunite
# Example output: ba62e5d465ff0 a7ccaadd632cf 2 minutes ago Running tigers-reunite ...
# Get container info
crictl inspect ba62e5d465ff0 | grep runtimeType
# Output: "runtimeType": "io.containerd.runc.v2",
# Create the first file
echo "ba62e5d465ff0 io.containerd.runc.v2" > /opt/course/17/pod-container.txt
# Get container logs
crictl logs ba62e5d465ff0 > /opt/course/17/pod-container.log
# The logs will contain Apache server startup messages
The file /opt/course/17/pod-container.log should contain the Apache httpd logs, which typically include server startup messages and configuration notices.
Preview Questions
Preview Question 1 | ETCD Information
Solve this question on: ssh cka9412
The cluster admin asked you to find out the following information about etcd running on cka9412:
- Server private key location
- Server certificate expiration date
- Is client certificate authentication enabled
Write this information into /opt/course/p1/etcd-info.txt.
Answer:
# Find out how etcd is set up
k -n kube-system get pod
find /etc/kubernetes/manifests/
# Check etcd manifest to find configuration
vim /etc/kubernetes/manifests/etcd.yaml
# Check certificate expiration
openssl x509 -noout -text -in /etc/kubernetes/pki/etcd/server.crt | grep Validity -A2
# Write to required file
cat > /opt/course/p1/etcd-info.txt << EOF
Server private key location: /etc/kubernetes/pki/etcd/server.key
Server certificate expiration date: Oct 29 14:19:27 2025 GMT
Is client certificate authentication enabled: yes
EOF
Preview Question 2 | Kube-Proxy iptables
Solve this question on: ssh cka2556
You're asked to confirm that kube-proxy is running correctly. For this perform the following in Namespace project-hamster:
- Create Pod p2-pod with image nginx:1-alpine
- Create Service p2-service which exposes the Pod internally in the cluster on port 3000->80
- Write the iptables rules of node cka2556 belonging to the created Service p2-service into file /opt/course/p2/iptables.txt
- Delete the Service and confirm that the iptables rules are gone again
Answer:
# Create Pod
k -n project-hamster run p2-pod --image=nginx:1-alpine
# Create Service
k -n project-hamster expose pod p2-pod --name p2-service --port 3000 --target-port 80
# Verify
k -n project-hamster get pod,svc,ep
# Check iptables rules
sudo -i
iptables-save | grep p2-service > /opt/course/p2/iptables.txt
# Delete Service and verify rules are gone
k -n project-hamster delete svc p2-service
iptables-save | grep p2-service
Preview Question 3 | Change Service CIDR
Solve this question on: ssh cka9412
- Create a Pod named check-ip in Namespace default using image httpd:2-alpine
- Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service
- Change the Service CIDR to 11.96.0.0/12 for the cluster
- Create a second Service named check-ip-service2 pointing to the same Pod
Answer:
# Create Pod and expose it
k run check-ip --image=httpd:2-alpine
k expose pod check-ip --name check-ip-service --port 80
# Check the Service IP
k get svc
# Change Service CIDR in kube-apiserver manifest
sudo -i
vim /etc/kubernetes/manifests/kube-apiserver.yaml
# Change --service-cluster-ip-range=11.96.0.0/12
# Wait for kube-apiserver to restart
watch crictl ps
kubectl -n kube-system get pod | grep api
# Change controller manager config
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# Change --service-cluster-ip-range=11.96.0.0/12
# Wait for restart
watch crictl ps
kubectl -n kube-system get pod | grep controller
# Create second Service
k expose pod check-ip --name check-ip-service2 --port 80
# Verify
k get svc
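To compare the ClusterIPs side by side: check-ip-service2 should receive an address from 11.96.0.0/12, while the original Service keeps its previously allocated IP until recreated.
k get svc -o jsonpath="{range .items[*]}{.metadata.name} {.spec.clusterIP}{'\n'}{end}"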
CKA Tips Kubernetes 1.32
Knowledge
- Study all topics in the curriculum thoroughly
- Practice with the CKA Simulator
- Set up aliases and become efficient with kubectl
- Learn from KillerCoda scenarios
- Understand Kubernetes components and troubleshooting
- Know advanced scheduling
- Be familiar with fixing components by comparing configurations
- Practice Kubeadm for cluster setup and node addition
- Know how to create Ingress resources
- Learn ETCD snapshot and restore procedures
CKA Exam Info
Kubernetes Documentation
Get familiar with the allowed documentation resources, primarily the official Kubernetes documentation at kubernetes.io/docs.
The Exam UI / Remote Desktop
- Runs on Ubuntu/Debian with XFCE desktop
- May experience some lag - use a good internet connection
- Pre-configured with kubectl, aliases, autocompletion
- Various tools pre-installed (yq, curl, wget, man)
- You can install additional tools like tmux or jq
Copy & Paste:
- Right mouse context menu always works
- In Terminal: Ctrl+Shift+c and Ctrl+Shift+v
- In other apps: Ctrl+c and Ctrl+v
PSI Bridge
- Exam is taken using PSI Secure Browser
- Multiple monitors no longer permitted
- Personal bookmarks not permitted
- Remote desktop provides all necessary tools
- Timer displays actual time remaining with alerts
Terminal Handling
Be fast
- Use the history command or Ctrl+r for command history
- Use Ctrl+z to background tasks and fg to bring them back
- Delete Pods quickly with k delete pod x --grace-period 0 --force
Vim
- Configure vim with ~/.vimrc or directly in vim: set tabstop=2, set expandtab, set shiftwidth=2
- Toggle line numbers: :set number / :set nonumber
- Copy/paste in vim:
  - Mark lines: Esc+V (then arrow keys)
  - Copy marked lines: y
  - Cut marked lines: d
  - Paste lines: p or P
- Indent multiple lines:
  - Set shiftwidth: :set shiftwidth=2
  - Mark lines: Shift+v and arrow keys
  - Indent: > or <
  - Repeat: .
CKA Simulator A Kubernetes 1.32 (Solutions Summary)
Question 1 | Contexts
Solve this question on: ssh cka9412
Extract information from kubeconfig file:
# 1. Get all context names
k --kubeconfig /opt/course/1/kubeconfig config get-contexts -oname > /opt/course/1/contexts
# 2. Get current context
k --kubeconfig /opt/course/1/kubeconfig config current-context > /opt/course/1/current-context
# 3. Extract certificate and decode
k --kubeconfig /opt/course/1/kubeconfig config view --raw -ojsonpath="{.users[0].user.client-certificate-data}" | base64 -d > /opt/course/1/cert
Question 2 | MinIO Operator, CRD Config, Helm Install
Solve this question on: ssh cka7968
# 1. Create namespace
k create ns minio
# 2. Install helm chart
helm -n minio install minio-operator minio/operator
# 3. Update Tenant YAML
# Edit file and add enableSFTP: true
vim /opt/course/2/minio-tenant.yaml
# Add this under features:
# enableSFTP: true
# 4. Create Tenant resource
k -f /opt/course/2/minio-tenant.yaml apply
Question 3 | Scale down StatefulSet
Solve this question on: ssh cka3962
# Find StatefulSet
k -n project-h800 get pod | grep o3db
k -n project-h800 get sts | grep o3db
# Scale down
k -n project-h800 scale sts o3db --replicas 1
Question 4 | Find Pods first to be terminated
Solve this question on: ssh cka2556
# Find Pods without resource requests defined
k -n project-c13 describe pod | grep -A 3 -E 'Requests|^Name:'
# or
k -n project-c13 get pod -o jsonpath="{range .items[*]}{.metadata.name} {.status.qosClass}{'\n'}{end}"
# Write pod names to file
echo "c13-3cc-runner-heavy-65588d7d6-djtv9map
c13-3cc-runner-heavy-65588d7d6-v8kf5map
c13-3cc-runner-heavy-65588d7d6-wwpb4map" > /opt/course/4/pods-terminated-first.txt
Question 5 | Kustomize configure HPA Autoscaler
Solve this question on: ssh cka5774
# 1. Remove ConfigMap - edit files to remove ConfigMap sections
# 2. Add HPA in base/api-gateway.yaml:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-gateway
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-gateway
  minReplicas: 2
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
# 3. Override in prod/api-gateway.yaml:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-gateway
spec:
  maxReplicas: 6
# 4. Apply changes
k kustomize staging | kubectl apply -f -
k kustomize prod | kubectl apply -f -
k -n api-gateway-staging delete cm horizontal-scaling-config
k -n api-gateway-prod delete cm horizontal-scaling-config
Question 6 | Storage, PV, PVC, Pod volume
Solve this question on: ssh cka7968
# Create PV
cat <<EOF > 6_pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"
EOF
k -f 6_pv.yaml create
# Create PVC
cat <<EOF > 6_pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: safari-pvc
  namespace: project-t230
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF
k -f 6_pvc.yaml create
# Create Deployment
k -n project-t230 create deploy safari --image=httpd:2-alpine --dry-run=client -o yaml > 6_dep.yaml
# Edit to add volume mount:
#   volumes:
#     - name: data
#       persistentVolumeClaim:
#         claimName: safari-pvc
#   containers:
#     - ...
#       volumeMounts:
#         - name: data
#           mountPath: /tmp/safari-data
k -f 6_dep.yaml create
Question 7 | Node and Pod Resource Usage
Solve this question on: ssh cka5774
# Create script for node resources
echo 'kubectl top node' > /opt/course/7/node.sh
# Create script for pod resources
echo 'kubectl top pod --containers=true' > /opt/course/7/pod.sh
Question 8 | Update Kubernetes Version and join cluster
Solve this question on: ssh cka3962
# Check control plane version
k get node
# SSH to worker node
ssh cka3962-node1
sudo -i
# Update kubelet and kubectl
apt update
apt install kubectl=1.32.1-1.1 kubelet=1.32.1-1.1
# On control plane, generate join command
ssh cka3962
sudo -i
kubeadm token create --print-join-command
# Copy the output command
# Join node to cluster
ssh cka3962-node1
sudo -i
kubeadm join 192.168.100.31:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Question 9 | Contact K8s Api from inside Pod
Solve this question on: ssh cka9412
# Create Pod with ServiceAccount
cat <<EOF > 9.yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-contact
  namespace: project-swan
  labels:
    run: api-contact
spec:
  serviceAccountName: secret-reader
  containers:
    - image: nginx:1-alpine
      name: api-contact
EOF
k -f 9.yaml apply
# Exec into Pod and query API
k -n project-swan exec api-contact -it -- sh
# Inside Pod:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}" > result.json
exit
# Copy result to required location
k -n project-swan exec api-contact -it -- cat result.json > /opt/course/9/result.json
Question 10 | RBAC ServiceAccount Role RoleBinding
Solve this question on: ssh cka3962
# Create ServiceAccount
k -n project-hamster create sa processor
# Create Role
k -n project-hamster create role processor --verb=create --resource=secret --resource=configmap
# Create RoleBinding
k -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor
Question 11 | DaemonSet on all Nodes
Solve this question on: ssh cka2556
# Create DaemonSet YAML
cat <<EOF > 11.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger
  labels:
    id: ds-important
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
  selector:
    matchLabels:
      id: ds-important
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
  template:
    metadata:
      labels:
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      containers:
        - image: httpd:2-alpine
          name: ds-important
          resources:
            requests:
              cpu: 10m
              memory: 10Mi
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
EOF
k -f 11.yaml create
Question 12 | Deployment on all Nodes
Solve this question on: ssh cka2556
# Create Deployment with PodAntiAffinity
cat <<EOF > 12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-important
  namespace: project-tiger
  labels:
    id: very-important
spec:
  replicas: 3
  selector:
    matchLabels:
      id: very-important
  template:
    metadata:
      labels:
        id: very-important
    spec:
      containers:
        - image: nginx:1-alpine
          name: container1
        - image: google/pause
          name: container2
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: id
                    operator: In
                    values:
                      - very-important
              topologyKey: kubernetes.io/hostname
EOF
k -f 12.yaml create
Question 13 | Gateway Api Ingress
Solve this question on: ssh cka7968
# Create HTTPRoute
cat <<EOF > http-route.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: traffic-director
  namespace: project-r500
spec:
  parentRefs:
    - name: main
  hostnames:
    - "r500.gateway"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /desktop
      backendRefs:
        - name: web-desktop
          port: 80
    - matches:
        - path:
            type: PathPrefix
            value: /mobile
      backendRefs:
        - name: web-mobile
          port: 80
    - matches:
        - path:
            type: PathPrefix
            value: /auto
          headers:
            - type: Exact
              name: user-agent
              value: mobile
      backendRefs:
        - name: web-mobile
          port: 80
    - matches:
        - path:
            type: PathPrefix
            value: /auto
      backendRefs:
        - name: web-desktop
          port: 80
EOF
k -f http-route.yaml apply
Question 14 | Check how long certificates are valid
Solve this question on: ssh cka9412
# Check certificate expiration
sudo -i
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep Validity -A2
echo "Oct 29 14:19:27 2025 GMT" > /opt/course/14/expiration
# Write renewal command
echo "kubeadm certs renew apiserver" > /opt/course/14/kubeadm-renew-certs.sh
Question 15 | NetworkPolicy
Solve this question on: ssh cka7968
# Create NetworkPolicy
cat <<EOF > 15_np.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: db1
      ports:
        - protocol: TCP
          port: 1111
    - to:
        - podSelector:
            matchLabels:
              app: db2
      ports:
        - protocol: TCP
          port: 2222
EOF
k -f 15_np.yaml create
Question 16 | Update CoreDNS Configuration
Solve this question on: ssh cka5774
# Backup ConfigMap
k -n kube-system get cm coredns -oyaml > /opt/course/16/coredns_backup.yaml
# Edit ConfigMap
k -n kube-system edit cm coredns
# Change line:
# kubernetes cluster.local in-addr.arpa ip6.arpa {
# to:
# kubernetes custom-domain cluster.local in-addr.arpa ip6.arpa {
# Restart CoreDNS
k -n kube-system rollout restart deploy coredns
Question 17 | Find Container of Pod and check info
Solve this question on: ssh cka2556
# Create Pod
k -n project-tiger run tigers-reunite --image=httpd:2-alpine --labels "pod=container,container=pod"
# Find node and container
k -n project-tiger get pod tigers-reunite -o wide
ssh cka2556-node1
sudo -i
crictl ps | grep tigers-reunite
# Container ID will be something like: ba62e5d465ff0
# Write ID and runtimeType
crictl inspect ba62e5d465ff0 | grep runtimeType
echo "ba62e5d465ff0 io.containerd.runc.v2" > /opt/course/17/pod-container.txt
# Write logs
crictl logs ba62e5d465ff0 > /opt/course/17/pod-container.log
Preview Question 1 | ETCD Information
Solve this question on: ssh cka9412
# Find etcd configuration
sudo -i
find /etc/kubernetes/manifests/ -name "etcd.yaml"
cat /etc/kubernetes/manifests/etcd.yaml | grep -E 'key-file|cert-file|client-cert-auth'
# Check certificate expiration
openssl x509 -noout -text -in /etc/kubernetes/pki/etcd/server.crt | grep Validity -A2
# Write requested information
cat > /opt/course/p1/etcd-info.txt << EOF
Server private key location: /etc/kubernetes/pki/etcd/server.key
Server certificate expiration date: Oct 29 14:19:27 2025 GMT
Is client certificate authentication enabled: yes
EOF
Preview Question 2 | Kube-Proxy iptables
Solve this question on: ssh cka2556
# Create Pod and Service
k -n project-hamster run p2-pod --image=nginx:1-alpine
k -n project-hamster expose pod p2-pod --name p2-service --port 3000 --target-port 80
# Check iptables rules
sudo -i
iptables-save | grep p2-service > /opt/course/p2/iptables.txt
# Delete Service and verify
k -n project-hamster delete svc p2-service
iptables-save | grep p2-service # Should return nothing
Preview Question 3 | Change Service CIDR
Solve this question on: ssh cka9412
# Create Pod and Service
k run check-ip --image=httpd:2-alpine
k expose pod check-ip --name check-ip-service --port 80
# Check Service IP
k get svc check-ip-service
# Update kube-apiserver configuration
sudo -i
vim /etc/kubernetes/manifests/kube-apiserver.yaml
# Change --service-cluster-ip-range=11.96.0.0/12
# Update kube-controller-manager configuration
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# Change --service-cluster-ip-range=11.96.0.0/12
# Wait for components to restart
watch crictl ps
# Create second Service
k expose pod check-ip --name check-ip-service2 --port 80
# Verify new Service has IP from new CIDR
k get svc
CKA Tips Kubernetes 1.32
Knowledge
- Study all topics thoroughly from the curriculum
- Practice with CKA Simulator sessions
- Set up your bash aliases and be comfortable with kubectl
- Prepare for practical tasks involving K8s resource creation
- Learn through KillerCoda scenarios for CKA
Key Skills to Master
# Example of useful aliases
alias k='kubectl'
alias kg='kubectl get'
alias kd='kubectl describe'
alias kc='kubectl create'
alias ka='kubectl apply -f'
# Set kubectl autocompletion
source <(kubectl completion bash)
complete -F __start_kubectl k
Terminal Efficiency
# Fast access to history
history | grep <command>
Ctrl+r # Reverse search in history
# Background tasks
kubectl delete pod mypod & # Run in background
Ctrl+z # Suspend current task
fg # Bring task to foreground
# Fast pod deletion
k delete pod mypod --grace-period 0 --force
# Multiple terminal sessions with tmux
tmux new -s mysession
tmux attach -t mysession
Vim Tips
# Create .vimrc with basic settings
cat > ~/.vimrc << EOF
set tabstop=2
set expandtab
set shiftwidth=2
EOF
# Toggle line numbers
# In vim, press Esc and type:
:set number # Show line numbers
:set nonumber # Hide line numbers
# Jump to specific line
# In vim, press Esc and type:
:42 # Jump to line 42
# Copy/paste operations
# Mark lines: Esc+V (then arrow keys)
# Copy marked: y
# Cut marked: d
# Paste: p or P